

Google's SynthID is the latest tool for catching AI-made content. What is AI 'watermarking' and does it work?

AIHub

Last month, Google announced SynthID Detector, a new tool to detect AI-generated content. Google claims it can identify AI-generated content in text, image, video or audio. But there are some caveats. One of them is that the tool is currently only available to "early testers" through a waitlist.


Generative AI is already being used in journalism – here's how people feel about it

AIHub

Generative artificial intelligence (AI) has taken off at lightning speed in the past couple of years, creating disruption in many industries. A new report published this week finds that news audiences and journalists alike are concerned about how news organisations are – and could be – using generative AI such as chatbots, image, audio and video generators, and similar tools. The report draws on three years of interviews and focus group research into generative AI and journalism in Australia and six other countries (United States, United Kingdom, Norway, Switzerland, Germany and France). Only 25% of our news audience participants were confident they had encountered generative AI in journalism. About 50% were unsure or suspected they had.


Automating the Analysis of Public Saliency and Attitudes towards Biodiversity from Digital Media

Giebink, Noah, Gupta, Amrita, Verìssimo, Diogo, Chang, Charlotte H., Chang, Tony, Brennan, Angela, Dickson, Brett, Bowmer, Alex, Baillie, Jonathan

arXiv.org Artificial Intelligence

Measuring public attitudes toward wildlife provides crucial insights into our relationship with nature and helps monitor progress toward Global Biodiversity Framework targets. Yet, conducting such assessments at a global scale is challenging. Manually curating search terms for querying news and social media is tedious, costly, and can lead to biased results. Raw news and social media data returned from queries are often cluttered with irrelevant content and syndicated articles. We aim to overcome these challenges by leveraging modern Natural Language Processing (NLP) tools. We introduce a folk taxonomy approach for improved search term generation and employ cosine similarity on Term Frequency-Inverse Document Frequency vectors to filter syndicated articles. We also introduce an extensible relevance filtering pipeline which uses unsupervised learning to reveal common topics, followed by an open-source zero-shot Large Language Model (LLM) to assign topics to news article titles, which are then used to assign relevance. Finally, we conduct sentiment, topic, and volume analyses on resulting data. We illustrate our methodology with a case study of news and X (formerly Twitter) data before and during the COVID-19 pandemic for various mammal taxa, including bats, pangolins, elephants, and gorillas. During the data collection period, up to 62% of articles including keywords pertaining to bats were deemed irrelevant to biodiversity, underscoring the importance of relevance filtering. At the pandemic's onset, we observed increased volume and a significant sentiment shift toward horseshoe bats, which were implicated in the pandemic, but not for other focal taxa. The proposed methods open the door to conservation practitioners applying modern and emerging NLP tools, including LLMs "out of the box," to analyze public perceptions of biodiversity during current events or campaigns.
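The syndication-filtering step the abstract describes can be sketched with a tiny, self-contained TF-IDF and cosine-similarity implementation. The example texts, tokenization, and the 0.8 threshold below are illustrative assumptions, not the paper's actual pipeline:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors (term -> weight) for whitespace-tokenized docs."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter(term for doc in tokenized for term in set(doc))
    return [
        {term: count * math.log(n / df[term]) for term, count in Counter(doc).items()}
        for doc in tokenized
    ]

def cosine(u, v):
    """Cosine similarity between two sparse term -> weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

articles = [
    "horseshoe bats linked to new outbreak say scientists",
    "scientists say horseshoe bats linked to new outbreak",
    "elephant conservation funding doubles across east africa",
]

vecs = tfidf_vectors(articles)
THRESHOLD = 0.8  # illustrative; a real pipeline would tune this on labeled data
duplicates = [
    (i, j)
    for i in range(len(vecs))
    for j in range(i + 1, len(vecs))
    if cosine(vecs[i], vecs[j]) >= THRESHOLD
]
print(duplicates)  # the first two articles are flagged as likely syndicated copies
```

Pairs above the threshold can then be collapsed to a single representative article before the topic and sentiment analyses.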


The Next Tech Backlash Will Be About Hygiene

TIME - Tech

For centuries it was biology that made humans sick. Today, it is often stress. So argues Dr. Gabor Maté about the unrecognized toll that "normal" modern life has on your mental and physical health. Dr. Maté's research, which struck a chord in 2023, invites reflection on the rollout of generative AI into daily life in 2024. As half of British teens report feeling addicted to social media, and as the U.S. surgeon general offers a rare caution against its health risks, the infusion of generative AI into social media appears to threaten our basic hygiene, meaning "the conditions or practices conducive to maintaining health and preventing disease."


How Max Tani Became the Go-To Guy for Horrible News About Media Layoffs

Slate

Maxwell Tani is known for his work on an obituary beat of sorts. A media reporter at Semafor, he always seems to be the first to break the news whenever something terrible happens to journalists at one outlet or another. He's been busy: according to one tabulation, more than 500 journalists were laid off in January alone. A scroll through Tani's account on X surfaces a glut of executive memos, couched in corporate-speak, informing staff that they'll soon be laid off--at Business Insider, Engadget, the Messenger, Vice, and the Wall Street Journal. Sometimes he shares the news of an impending layoff before these memos even go out--and before employees have been informed. Slate spoke with Tani about what it's like to document the worst moments on the media beat, and how he feels about his place in the news-about-the-news ecosystem. We also tried to diagnose the ills of the industry--and find bright spots ahead.


Breaking the Deepfake Spell: The Magic of Blockchain, NFTs, and Machine Learning

#artificialintelligence

The rise of deepfakes, digitally manipulated content created using artificial intelligence, poses a significant threat to the authenticity of digital media. To combat this issue, blockchain, NFTs, and machine learning are potential pillars for authenticating and verifying digital content. Combined, these technologies could support a more robust, sustainable, and decentralized defence against deepfakes. Blockchain technology provides a secure, tamper-evident way to verify the authenticity of digital media: by storing a cryptographic fingerprint of a file on a distributed ledger, any later alteration of the original content becomes detectable. This approach can be applied to both image and video content, helping to secure every step of the creation, distribution, and storage process.
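The tamper-detection idea here can be illustrated without any blockchain machinery at all: record a cryptographic hash of the media file when it is published, and any later edit changes the digest. A minimal sketch in Python, where a plain dictionary stands in for the on-chain record and the byte strings are placeholder data:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of the media file's bytes."""
    return hashlib.sha256(data).hexdigest()

# At publication: compute the digest and record it on the ledger
# (here, a plain dict standing in for a blockchain transaction).
ledger = {}
original = b"placeholder bytes of the original video"
ledger["clip-001"] = fingerprint(original)

# At verification: any edit, however small, changes the digest.
tampered = b"placeholder bytes of the original video!"
print(ledger["clip-001"] == fingerprint(original))   # True
print(ledger["clip-001"] == fingerprint(tampered))   # False
```

What the blockchain adds on top of this sketch is not the hashing itself but the guarantee that the recorded digest cannot be quietly rewritten after the fact.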


Senior Data Engineer at Publicis Groupe - Chicago, IL, United States

#artificialintelligence

Epsilon is the leader in outcome-based marketing. We enable marketing that's built on proof, not promises. Through Epsilon PeopleCloud, the marketing platform for personalizing consumer journeys with performance transparency, Epsilon helps marketers anticipate, activate and prove measurable business outcomes. Powered by CORE ID, the most accurate and stable identity management platform representing 200 million people, Epsilon's award-winning data and technology are rooted in privacy by design and underpinned by powerful AI. With more than 50 years of experience in personalization and performance working with the world's top brands, agencies and publishers, Epsilon is a trusted partner leading CRM, digital media, loyalty and email programs.


What's next for the metaverse in 2023? - Verdict

#artificialintelligence

The metaverse has been making waves as the next big thing in digital media for months. However, its potential is arguably on every tech-head's mind as we usher in 2023. Research firm GlobalData defines the metaverse as a virtual world where users can share experiences and interact in real time within simulated scenarios. Its core technologies are vast, but mainly include virtual reality (VR), artificial intelligence (AI) and augmented reality (AR). "Although the metaverse is in the early stages of development, it has the potential to be the next mega-theme in digital media," GlobalData analysts write in a report. "The metaverse could transform how people work, shop, interact, and consume content."


Artificial Intelligence is expanding human creativity – and CQU Digital Media

#artificialintelligence

A CQUniversity Lecturer in Digital Media is delving into the stunning and strange world of Artificial Intelligence (AI) Art, with a plan to transform digital media courses at CQU. The talented Brendan Murphy has been experimenting with DALL·E 2, which is an AI system that can create realistic images and art from text descriptions. Mr Murphy said the system works by analysing how humans have described images and taps into its memory to produce artwork. "DALL·E 2 is like an art Uber driver who will navigate you through a very complex space to get to the image you want," Mr Murphy said. "The key part of the system is its roadmap, built from AI analysis of a plethora of paired images and text descriptions. I think the system initially set out to capture subjects, styles and genres, but because human-written captions or descriptions accompany every image it analysed, the system also captured moods and emotions."


Spiral Language Modeling

Cao, Yong, Feng, Yukun, Kuang, Shaohui, Xu, Gu

arXiv.org Artificial Intelligence

In almost all text generation applications, word sequences are constructed in a left-to-right (L2R) or right-to-left (R2L) manner, as natural language sentences are written either L2R or R2L. However, we find that the natural language written order is not essential for text generation. In this paper, we propose Spiral Language Modeling (SLM), a general approach that enables one to construct natural language sentences beyond the L2R and R2L orders. SLM forms natural language text by starting from an arbitrary token inside the result text and expanding the remaining tokens around the selected one. It makes the decoding order a new optimization objective besides the language model perplexity, which further improves the diversity and quality of the generated text. Furthermore, SLM makes it possible to manipulate the text construction process by selecting a proper starting token. SLM also introduces generation orderings as additional regularization to improve model robustness in low-resource scenarios. Experiments on 8 widely studied Neural Machine Translation (NMT) tasks show that SLM is consistently effective, with a BLEU increase of up to 4.7 compared to the conventional L2R decoding approach.
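As a toy illustration of the construction order the abstract describes, and not the paper's trained decoder, the sketch below starts from an arbitrary token of a known sentence and alternately expands to the right and to the left. The alternation scheme is an assumption chosen for simplicity; in SLM the expansion order is itself learned:

```python
def spiral_order(tokens, start):
    """Yield (position, token) pairs in one possible inside-out construction
    order: begin at `start`, then alternately extend right and left."""
    yield start, tokens[start]
    left, right = start - 1, start + 1
    take_right = True
    while left >= 0 or right < len(tokens):
        if take_right and right < len(tokens):
            yield right, tokens[right]
            right += 1
        elif left >= 0:
            yield left, tokens[left]
            left -= 1
        else:
            yield right, tokens[right]
            right += 1
        take_right = not take_right

sentence = "the cat sat on the mat".split()
order = list(spiral_order(sentence, start=2))  # begin at "sat"
print([tok for _, tok in order])  # tokens in the order they are generated
```

Sorting the yielded pairs by position recovers the original sentence, which is the point: the final text is the same, only the order in which its tokens are produced differs.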